Research Statement – Ali Hortaçsu
Much of my work is focused on
empirically assessing the efficiency of markets. In a nutshell, I utilize
detailed micro-level data from the markets I study to estimate preference and
technology parameters that rationalize individual behavior. I then use the
estimated preferences and technological parameters to construct (constrained) efficient
benchmarks and assess how far observed market outcomes are from efficiency.
This comparison also motivates discussions of how market rules can be altered
to improve efficiency. I have applied the above framework to many market
settings, including financial markets, energy markets, and the Internet, and a
variety of market clearing mechanisms, including auctions, matching, and costly
search.
Multi-Unit Auctions
Perhaps the clearest sustained
strand in my research is my work on markets that clear through multi-unit
auctions. Many important markets clear in this fashion: for example, almost all
developed country governments auction off their debt using a type of multi-unit
auction; most deregulated electricity markets clear using a multi-unit auction;
the European Central Bank runs its monetary policy using such an auction. While
modeling equilibrium strategic behavior in such markets remains an open
challenge, the methods that my co-authors and I have developed demonstrate that
empirical work that achieves the objectives above is feasible.
My first paper on multi-unit auctions, Mechanism Choice and Strategic Bidding in Divisible Good Auctions: An Empirical Analysis of the Turkish Treasury Auction Market, was my job market paper. It appeared in the Journal of Political Economy, with very significant contributions by David McAdams to its final version. In this paper, we study Milton Friedman's question of what type of auction format to use to market government securities. Unfortunately, auction
theory does not provide an a priori
answer to this question, beyond providing examples demonstrating that the
revenue ranking of the various mechanisms is ambiguous. The chief innovation of the paper, therefore, is to utilize individual bidder-level data to recover the true willingness-to-pay curves that rationalize the observed bids, under the assumption of best-response behavior. To that end, the paper provides econometric methods for constructing estimates of bidders' preferences. With the true willingness-to-pay curves in hand, the paper then constructs the efficient Vickrey benchmark and studies how the real-world mechanism departs from this benchmark. The answer, in this particular setting, was "not too much."
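The Vickrey benchmark referenced above can be illustrated with a stylized discrete-unit example. The sketch below uses toy assumptions (identical units, declining marginal-value schedules, made-up numbers) and is not the paper's estimator: units are allocated to the highest marginal values, and each bidder pays the externality its participation imposes on the others.

```python
# Stylized multi-unit Vickrey auction (illustrative toy example only; the
# paper's exercise uses estimated willingness-to-pay curves, not these numbers).

def efficient_welfare(schedules, supply, exclude=None):
    """Max total value from allocating `supply` units greedily by marginal value."""
    mvs = [mv for i, s in enumerate(schedules) if i != exclude for mv in s]
    return sum(sorted(mvs, reverse=True)[:supply])

def vickrey(schedules, supply):
    """Return each bidder's (units won, Vickrey payment)."""
    all_mvs = sorted(
        ((mv, i) for i, s in enumerate(schedules) for mv in s), reverse=True
    )
    winners = all_mvs[:supply]
    out = []
    for i, _ in enumerate(schedules):
        units = sum(1 for _, j in winners if j == i)
        value_i = sum(mv for mv, j in winners if j == i)
        # Payment = welfare of others without bidder i, minus welfare of
        # others when bidder i participates (the externality bidder i imposes).
        others_without_i = efficient_welfare(schedules, supply, exclude=i)
        others_with_i = efficient_welfare(schedules, supply) - value_i
        out.append((units, others_without_i - others_with_i))
    return out

# Two bidders, 3 identical units, declining marginal-value schedules.
schedules = [[10, 8, 3], [9, 4, 2]]
print(vickrey(schedules, supply=3))
```

Here bidder 1 wins two units and pays the two marginal values it displaces from bidder 2's schedule; this allocation-and-payment rule makes truthful reporting of the willingness-to-pay curve optimal, which is what makes it a useful efficiency benchmark.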
My second paper on
this agenda, Understanding
Strategic Models of Bidding in Deregulated Electricity Markets: A Case Study of
ERCOT, studies auctions for real-time electricity procurement in Texas.
This paper conducts the dual exercise of testing whether best-response
behavior is an appropriate assumption to impose on bidders participating in
complex multi-unit auction mechanisms. To do this, Steven Puller and I reconstructed
the marginal costs of electricity production for each of the bidding generators
for every hour. Using these MC estimates, we then constructed best-response
bids (using information available to bidders at the time of bidding) and compared them
to actual bids. What we find is a
large amount of variation across bidders in terms of achieving best-response
behavior. While some bidders were able to reap almost all of the achievable best-response profits, others were far less successful. While some of this variation was ascribable to learning, most of it appeared to be explained by scale.
Indeed, our field trips to the offices of several of these generation companies
revealed that bidders with more money at stake had hired much more
sophisticated traders who spoke a very similar language to economists, while
the underperformers utilized various heuristic rules of thumb (such as
arbitrary mark-ups designed to recoup fixed costs) with very little
consideration of the residual demand they were facing.
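The best-response calculation described above can be sketched in stylized form. The example below assumes a hypothetical linear residual demand curve and a constant marginal cost (both made up for illustration); the paper's actual exercise uses residual demand reconstructed from rivals' bids and estimated generator marginal costs for every hour.

```python
# Illustrative ex-post best-response pricing for a generator facing a known
# residual demand curve (hypothetical functional forms and numbers).

def residual_demand(p):
    """Hypothetical linear residual demand (MW) facing the generator at price p."""
    return max(0.0, 1000.0 - 2.0 * p)

def marginal_cost(q):
    """Hypothetical constant marginal cost of generation ($/MWh)."""
    return 20.0

def best_response_price(rd, mc, grid):
    """Pick the price on `grid` that maximizes profit along the residual demand curve."""
    def profit(p):
        q = rd(p)
        # With constant MC, total cost is simply mc(q) * q.
        return p * q - mc(q) * q
    return max(grid, key=profit)

grid = [p / 10 for p in range(0, 5001)]   # $0.0 to $500.0 in $0.1 steps
p_star = best_response_price(residual_demand, marginal_cost, grid)
print(p_star, residual_demand(p_star))
```

With these toy functional forms the profit (p - 20)(1000 - 2p) peaks at p = 260, so the best-response bid withholds output well above marginal cost; the arbitrary-markup heuristics described above ignore exactly this residual demand trade-off.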
Another contribution of this
paper was to show that deviations from best response behavior, even though
localized to smaller bidders, led to significant inefficiencies in this market
when aggregated. Indeed, the production inefficiencies in this market were as
much due to the best-response actions of the more sophisticated bidders as to the non-best-response behavior of unsophisticated bidders. While learning and
some market selection and consolidation led to some improvement, such effects
were relatively small.
My next two papers that study
multi-unit auctions were co-authored with Jakub Kastl. The first paper, Valuing Dealers' Informational
Advantage: A Study of Canadian Treasury Auctions, is an
in-depth study of front-running in that securities market. It also provides one
of the first formal tests of the common versus private values assumptions in a
financial market. The paper studies the interaction of primary dealers of
Canadian government securities and their customers, who have to route their
bids through primary dealers. The dataset we obtained tracked the timing of bid
submissions, and importantly, bid modifications by all bidders. Using this
information, we were able to test whether primary dealers' modifications of
their bids in response to customer bids were due to learning about the common
value of the security being auctioned, or whether a purely private values model
in which PDs learn about the location and shape of the residual supply curve
from customer bids could explain the data. Based on our tests, we were not able
to reject the null hypothesis of private values in this market. Given our validation of the private values model, we then embark on the exercise of quantifying the surplus the PDs derived from observing customers' order flow, which we found to be between 10% and 30% of dealers' profits.
My second paper with Jakub
Kastl, also co-authored with Nuno Cassola, is titled The 2007 Subprime
Market Crisis Through the Lens of European Central Bank Auctions for Short-Term
Funds. In this project, we studied the European Central Bank's weekly refinancing auctions of short-term (weekly) repo loans, which were the main conduit of monetary policy in the Eurozone.
What we noticed in the data was a sudden and dramatic increase in bids for ECB loans following August 2007, reflecting funding shortages in the inter-bank market.
However, along with the increase in the level of the bids, the dispersion also
increased, suggesting heterogeneity in funding problems across banks. That said,
while some of the observed bid increases may reflect a true and sudden shift in
the underlying WTP for short-term funding, some banks may have started bidding
higher just to remain competitive in the auctions. That is, the strategic nature of bids may have masked the true heterogeneity of funding troubles across Eurozone banks. Thus, our main methodological contribution in the paper was to dissociate this strategic, accommodating component of bids in order to isolate banks' underlying willingness-to-pay for ECB loans. One interesting result of this
exercise was that the estimated WTPs of the bidders were much better predictors
of balance sheet troubles at the end of 2007 than the bids themselves were, lending credence to our hypothesis that looking at bids rather than at fundamentals masks the true heterogeneity of the crisis. Another practical offshoot of the
exercise is that the bidding data – when sufficiently filtered to account
for strategic behavior – can provide high-frequency snapshots of the
short-term funding rates faced by individual banks in the Eurozone. Since interbank
markets are famously opaque, and published rates based on surveys, like LIBOR and EURIBOR, are famously corruptible, our bank-level barometer of short-term funding rates may be an important temperature gauge for Europe's central bankers.
Internet Marketplaces
Along with financial and energy
markets, I have also written several papers studying Internet marketplaces. Winner's Curse, Reserve Prices, and Endogenous Entry, with Pat Bajari, was one of the first papers written on online auctions, and Matching and Sorting in Online Dating is one of the first papers written on online dating.
The
eBay paper lays down a tractable theoretical framework to study eBay and other
– often very similar – online auctions. In particular, it provides an explanation for sniping through interdependent values, and also allows for stochastic entry by participants. The paper then structurally estimates the common value model and studies the optimality of sellers' reserve prices (from a profit-maximization point of view).
I have
written two further papers on eBay: Dynamics
of Seller Reputation: Evidence from eBay with Luis Cabral
focuses on eBay's reputation system, and provides empirical evidence of
opportunistic behavior by sellers in response to the incentives provided by
the reputation mechanism. The other paper, The
Geography of Trade on eBay and Mercado Libre, with
Asis Martinez-Jerez and Jason Douglas, studies the geographic reach of eBay
transactions, under the null hypothesis that the world is flat, especially on
the Internet. Our analysis reveals that while trade on the Internet is much
less distance dependent than offline, a form of home bias still persists in that
a very significant fraction of transactions occurs within the same city. Analyzing the prevalence of this effect across various product categories suggests that the main culprit for the same-city effect may be a lack of trust. Presumably,
eBay buyers prefer to kick the tires of their purchases or feel more
comfortable when the seller is nearby, should a return be necessary.
My work on online dating follows the same framework outlined at the beginning of this statement. In What Makes You Click? Mate Preferences in Online Dating, we
utilize detailed data on user actions on a large online dating site to estimate online daters' preferences over a large set of partner attributes, including measures of physical attractiveness. While many of our qualitative
findings might not be that surprising, our preference estimates allow us to
report the trade-offs between different attributes – i.e., how much
education/income compensates for physical attractiveness/racial differences.
Our second paper on the topic, Matching and Sorting in Online Dating, utilized these preference estimates to answer the following questions: (1) how efficient are the matches formed on the dating website compared to those that would have been proposed by, e.g., the Gale-Shapley algorithm equipped with the preference profiles we estimated for each user, and (2) can the various commonly documented sorting/homogamy patterns in U.S. marriages be explained by preferences alone, and not by other factors such as search frictions? The answer to (1) is that the observed allocations were quite similar to Gale-Shapley stable matches. As for (2), the answer is that preferences alone (scaled up to
represent the U.S.) indeed would explain a very large fraction of the sorting
patterns we see in the offline world, casting some doubt on the significance of other factors such as search frictions.
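For readers unfamiliar with the stable-matching benchmark used above, the Gale-Shapley deferred-acceptance algorithm can be sketched as follows. This is the standard textbook version on toy preference lists, not the paper's implementation, which runs on the estimated preference profiles:

```python
# Minimal Gale-Shapley deferred acceptance (proposer-optimal stable matching).
# Toy preference lists for illustration only.

def gale_shapley(proposer_prefs, receiver_prefs):
    """Prefs map each agent to a ranked list of partners (best first)."""
    # Precompute each receiver's ranking of proposers for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(lst)} for r, lst in receiver_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet tentatively matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # receiver -> current proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p's best not-yet-tried partner
        next_choice[p] += 1
        if r not in match:
            match[r] = p                  # r tentatively accepts p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # r prefers p: old partner is rejected
            match[r] = p
        else:
            free.append(p)                # r rejects p; p will propose again
    return {p: r for r, p in match.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))
```

In the toy example both men propose to w1 first; she keeps her preferred proposer m2, so m1 ends up with w2. No man-woman pair prefers each other to their assigned partners, which is the stability property the paper uses as its efficiency benchmark.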
Product Markets with Search and
Information Frictions
Yet another research agenda I
have tried to advance is to empirically study product markets with search and
information frictions. My first paper on this was Product
Differentiation, Search Costs and Competition in the Mutual Fund Industry, with
Chad Syverson, where we studied the market for Standard & Poor's 500 index funds. To our astonishment, we found very large (almost 30-fold) price differentials in this market, where price reflects the fees paid to money managers. To explain the nature and evolution of this price dispersion, we developed a model of costly search by individual investors, which took vertical product differentiation factors into account. We then show that the distribution of
search costs rationalizing the observed prices and market shares can be
nonparametrically identified from the data, taking into account vertical
product differentiation across various funds. Using our estimates of search
costs, we found that search costs for investors in the higher quantiles of the search cost distribution increased during the Internet bubble era, which may be rationalized by the fact that a large number of new and inexperienced investors were attracted to the stock market.
In
a recent paper titled Testing Models of
Consumer Search Using Data on Web Browsing and Purchasing Behavior,
co-authored with my former Ph.D. student Babur de los Santos and Matthijs Wildenbeest,
we use a very detailed web browsing and purchasing data set to study how people
shop for books on the Internet. To our knowledge, aside from a number of
laboratory experiments, this is one of the first papers that uses data on actual search behavior by consumers to
inform a model of consumer search. The paper conducts tests between various
search protocols, and proposes and estimates a hybrid model combining costly search and discrete choice over differentiated products to model the demand for books.
One of
my most recent projects, Advertising
and Competition in Privatized Social Security: The Case of Mexico, with
Chad Syverson and Justine Hastings, also focuses on information frictions. Here we study the beginnings of Mexico's privatized social security system, where the management fees charged by fund managers could be up to 27% of contributions, despite there being 17 competing management companies. Through detailed analysis of administrative data on individuals' choices, we find that the very high level of fees is rationalized by the very low price elasticity displayed by households, who also display very high sensitivity to advertising and sales efforts by plan providers. Thus, not surprisingly, the market became one where, instead of price competition, firms competed through their advertising and sales efforts, shunting millions of people into extremely high-cost retirement accounts. Using our estimated demand system, we study counterfactual
scenarios where advertising is regulated to be equal across firms, and find that
this may lead to significant cost savings.
The Organization of Firms
Last,
but not least, I have also worked on two projects pertaining to the
organizational structure of firms. The first of these papers, Cementing
Relationships: Vertical Integration, Foreclosure, Productivity, and Prices
with Chad Syverson, studies vertical mergers between cement and concrete
producers. Such mergers were vehemently opposed in the 1960s and 1970s, but the Chicago School's influence on vertical antitrust policy became much more pronounced in the 1980s and afterward. Using confidential Census data sets, we
find that the vertical mergers in this industry were, on average, efficiency enhancing,
leading to lower intermediate and final good prices and larger quantities.
In Vertical
Integration and Input Flows, we
dig deeper into the question of what a vertically integrated firm actually is by studying goods shipments across plants belonging to the same firm. We find that, in a large majority of firms that one may label as vertically integrated (meaning that the firm owns a plant producing a good that, according to the 4-digit SIC-based Input-Output tables, is used in the production of a good made at another plant belonging to the firm), there are no goods shipments between the firm's vertically related plants.
The
above result suggests that such vertical acquisitions after which there is no
explicit goods transfer between plants within the firm may be motivated by the
need to facilitate intangible input transfers, as suggested by Arrow and
Teece. Such intangibles may include management and technology know-how, marketing and distribution resources and assets, branding capital, etc. The
paper concludes with some suggestive evidence in agreement with the intangible
inputs theories of Arrow and Teece.